Current Issue: January - March, Volume: 2018, Issue Number: 1, Articles: 5
Alongside the best-known applications of brain-computer interface (BCI) technology for restoring communication abilities and controlling external devices, we present the state of the art of BCI use for cognitive assessment and training purposes. We first describe some preliminary attempts to develop verbal-motor-free BCI-based tests for evaluating specific or multiple cognitive domains in patients with Amyotrophic Lateral Sclerosis, disorders of consciousness, and other neurological diseases. Then we present the more heterogeneous and advanced field of BCI-based cognitive training, which has its roots in the context of neurofeedback therapy and addresses patients with neurological developmental disorders (autism spectrum disorder and attention-deficit/hyperactivity disorder), stroke patients, and elderly subjects. We discuss some advantages of BCI for both assessment and training purposes, the former concerning the possibility of longitudinally and reliably evaluating cognitive functions in patients with severe motor disabilities, the latter regarding the possibility of enhancing patients' motivation and engagement to improve neural plasticity. Finally, we discuss some present and future challenges in the use of BCI for the described purposes…
Affective computing in general, and human activity and intention analysis in particular, comprise a rapidly growing field of research. Head pose and emotion changes present serious challenges when applied to a player's training and ludology experience in serious games, to the analysis of customer satisfaction regarding broadcast and web services, or to the monitoring of a driver's attention. Given the increasing prominence and utility of depth sensors, it is now feasible to perform large-scale collection of three-dimensional (3D) data for subsequent analysis. Discriminative random regression forests were selected in order to rapidly and accurately estimate head pose changes in an unconstrained environment. To complete the secondary process of recognising four universal dominant facial expressions (happiness, anger, sadness and surprise), emotion recognition via facial expressions (ERFE) was adopted. A lightweight data exchange format (JavaScript Object Notation, JSON) was then employed to manipulate the data extracted from the two aforementioned settings. Motivated by the need to generate comprehensible visual representations from different sets of data, in this paper we introduce a system capable of monitoring human activity through head pose and emotion changes, utilising an affordable 3D sensing technology (Microsoft Kinect sensor)…
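The abstract does not specify the JSON schema used to pass head-pose and emotion data to the visualization layer; the minimal sketch below shows one plausible way to serialize per-frame estimates, where field names such as `head_pose` and `emotion` are illustrative assumptions rather than the authors' format.

```python
import json
import time

def make_frame_record(yaw, pitch, roll, emotion, confidence):
    """Bundle one frame of head-pose and emotion estimates into a dict.

    The field names (timestamp, head_pose, emotion) are placeholders,
    not the schema used in the paper.
    """
    return {
        "timestamp": time.time(),
        "head_pose": {"yaw_deg": yaw, "pitch_deg": pitch, "roll_deg": roll},
        "emotion": {"label": emotion, "confidence": confidence},
    }

# Example: serialize a short session so a monitoring front end can consume it.
session = [
    make_frame_record(12.5, -4.0, 1.2, "happiness", 0.83),
    make_frame_record(25.1, -6.3, 0.8, "surprise", 0.71),
]
print(json.dumps({"session": session}, indent=2))
```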
Numerous EEG-based brain-computer interface (BCI) systems that are being developed focus on novel feature extraction algorithms, classification methods, and combining existing approaches to create hybrid BCIs. Several recent studies demonstrated various advantages of hybrid BCI systems in terms of improved accuracy or the number of commands available to the user. Still, BCI systems are far from ready for daily use. Achieving high performance with fewer channels is one of the challenging issues that persists, especially with hybrid BCI systems, where multiple channels are necessary to record information from two or more EEG signal components. Therefore, this work proposes a single-channel (C3 or C4) hybrid BCI system that combines motor imagery (MI) and steady-state visually evoked potential (SSVEP) approaches. This study demonstrates that, besides MI features, SSVEP features can also be captured from the C3 or C4 channel. The results show that, owing to the rich feature information (MI and SSVEP) at these channels, the proposed hybrid BCI system outperforms both MI- and SSVEP-based systems, achieving an average classification accuracy of 85.6 ± 7.7% in a two-class task…
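To illustrate the general idea of pooling MI and SSVEP information from one electrode (a sketch, not the authors' exact pipeline), the snippet below computes mu/beta band power as MI features and narrow-band power at two assumed SSVEP stimulation frequencies from a single channel; the sampling rate, bands, and stimulation frequencies are placeholders.

```python
import numpy as np
from scipy.signal import welch

FS = 256  # sampling rate in Hz (assumed; not stated in the abstract)

def band_power(freqs, psd, lo, hi):
    """Approximate signal power in the [lo, hi] Hz band from a PSD."""
    mask = (freqs >= lo) & (freqs <= hi)
    df = freqs[1] - freqs[0]
    return psd[mask].sum() * df

def hybrid_features(single_channel_eeg, ssvep_freqs=(7.0, 13.0)):
    """Extract MI band-power and SSVEP peak-power features from one channel.

    The bands, stimulation frequencies, and downstream classifier are
    illustrative choices, not those reported in the study.
    """
    freqs, psd = welch(single_channel_eeg, fs=FS, nperseg=FS * 2)
    mi_feats = [
        band_power(freqs, psd, 8, 12),    # mu band (motor imagery)
        band_power(freqs, psd, 13, 30),   # beta band (motor imagery)
    ]
    ssvep_feats = [
        band_power(freqs, psd, f - 0.5, f + 0.5) for f in ssvep_freqs
    ]
    return np.array(mi_feats + ssvep_feats)

# Demo on synthetic data: 4 s of noise standing in for a C3 or C4 recording.
trial = np.random.randn(4 * FS)
print(hybrid_features(trial))
```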
Projectors have become a widespread tool for sharing information with large groups of people in Human-Robot Interaction in a comfortable way. Finding a suitable vertical surface becomes a problem when the projector changes position, as when a mobile robot is searching for suitable surfaces to project onto. Two problems must be addressed to achieve a correct undistorted image: (i) finding the biggest suitable surface free from obstacles and (ii) adapting the output image to correct the distortion due to the angle between the robot and a nonorthogonal surface. We propose a RANSAC-based method that detects a vertical plane inside a point cloud. Then, inside this plane, we apply a rectangle-fitting algorithm over the region in which the projector can work. Finally, the algorithm checks the surface for imperfections and occlusions and transforms the original image using a homography matrix to display it over the detected area. The proposed solution can detect projection areas in real time using a single Kinect camera, which makes it suitable for applications where a robot interacts with other people in unknown environments. Our Projection Surfaces Detector and Image Correction module allow a mobile robot to find the right surface and display images without deformation, improving its ability to interact with people…
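The homography-based correction step can be illustrated with OpenCV. In the sketch below the destination corner coordinates are invented for the example; in the paper they would come from the RANSAC plane fit and the rectangle-fitting stage.

```python
import cv2
import numpy as np

# Source corners: the full frame of the image we want to project.
h, w = 480, 640
src = np.float32([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]])

# Destination corners: where the detected, obstacle-free rectangle lands in
# projector coordinates. These values are made up for illustration.
dst = np.float32([[60, 40], [600, 70], [590, 430], [50, 460]])

# Homography that pre-distorts the image so it appears rectangular on the wall.
H = cv2.getPerspectiveTransform(src, dst)

image = np.full((h, w, 3), 200, dtype=np.uint8)       # placeholder content
cv2.putText(image, "HRI demo", (180, 250), cv2.FONT_HERSHEY_SIMPLEX,
            2, (0, 0, 255), 4)
warped = cv2.warpPerspective(image, H, (w, h))        # frame sent to projector
cv2.imwrite("warped_projection.png", warped)
```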
Motor-imagery tasks are a popular input method for controlling brain-computer interfaces (BCIs), partially due to their similarities to naturally produced motor signals. The use of functional near-infrared spectroscopy (fNIRS) in BCIs is still emerging and has shown potential as a supplement or replacement for electroencephalography. However, studies often use only two or three motor-imagery tasks, limiting the number of available commands. In this work, we present the results of the first four-class motor-imagery-based online fNIRS-BCI for robot control. Thirteen participants utilized upper- and lower-limb motor-imagery tasks (left hand, right hand, left foot, and right foot) that were mapped to four high-level commands (turn left, turn right, move forward, and move backward) to control the navigation of a simulated or real robot. A significant improvement in classification accuracy was found between the virtual-robot-based BCI (control of a virtual robot) and the physical-robot BCI (control of the DARwIn-OP humanoid robot). Differences were also found in the oxygenated hemoglobin activation patterns of the four tasks between the first and second BCI. These results corroborate previous findings that motor imagery can be improved with feedback and imply that a four-class motor-imagery-based fNIRS-BCI could be feasible with sufficient subject training…
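A rough sketch of the decoding side is given below, assuming mean and slope of oxygenated hemoglobin (HbO) per channel as features and an LDA classifier; neither choice is confirmed by the abstract. It only illustrates how four predicted classes could map to the robot commands named above.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Mapping from motor-imagery class to high-level robot command (per the abstract).
COMMANDS = {0: "turn left", 1: "turn right", 2: "move forward", 3: "move backward"}

def hbo_features(trial):
    """Summarize one trial of HbO signals, shaped (channels, samples).

    Mean and end-to-end slope per channel are a simple, common choice;
    the features used in the study may differ.
    """
    mean = trial.mean(axis=1)
    slope = trial[:, -1] - trial[:, 0]
    return np.concatenate([mean, slope])

# Synthetic data: 40 trials, 8 fNIRS channels, 100 samples per trial,
# with balanced labels for the four imagery classes.
rng = np.random.default_rng(0)
X = np.stack([hbo_features(rng.normal(size=(8, 100))) for _ in range(40)])
y = np.tile([0, 1, 2, 3], 10)

clf = LinearDiscriminantAnalysis().fit(X[:32], y[:32])
pred = clf.predict(X[32:])
print([COMMANDS[c] for c in pred])
```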